
    In the Privacy of Our Streets

    If one lives in a city and wants to be by oneself or have a private conversation with someone else, there are two ways to set about it: either one finds a place of solitude, such as one's bedroom, or one finds a place crowded enough, public enough, that attention to each person is so diluted that the crowd comes to resemble a deserted refuge. Often, one can get more privacy in public places than in the most private of spaces. The home is not always the ideal place to find privacy. Neighbours snoop, children ask questions, and family members judge. When the home suffocates privacy, the only escape is to go out, to the coffee shop or the public square. For centuries, city streets have been the true refuges of the solitary, the overwhelmed, and the underprivileged. Yet time and again we hear people argue that we have no claim to privacy while on the streets because they are part of the so-called public sphere. The main objective of this chapter is to argue that privacy belongs as much in the streets as it does in the home.

    Views on Privacy. A Survey

    The purpose of this survey was to gather individuals' attitudes and feelings towards privacy and the selling of data. A total of 1,107 people (N = 1,107) responded to the survey. Across continents, age, gender, and levels of education, people overwhelmingly think privacy is important. An impressive 82% of respondents deem privacy extremely or very important, and only 1% deem privacy unimportant. Similarly, 88% of participants either agree or strongly agree with the statement that 'violations to the right to privacy are one of the most important dangers that citizens face in the digital age.' The great majority of respondents (92%) report having experienced at least one privacy breach. People's first concern when losing privacy is the possibility that their personal data might be used to steal money from them. Interestingly, in second place in the ranking of concerns, people report being concerned about privacy because 'Privacy is a good in itself, above and beyond the consequences it may have.' People tend to feel that they cannot trust companies and institutions to protect their privacy and use their personal data in responsible ways. The majority of people believe that governments should not be allowed to collect everyone's personal data. Privacy is thought to be a right that should not have to be paid for.

    Three Things Digital Ethics Can Learn From Medical Ethics

    Ethical codes, ethics committees, and respect for autonomy have been key to the development of medical ethics; these are elements that digital ethics would do well to emulate.

    The Internet and Privacy

    In this chapter I give a brief explanation of what privacy is, argue that protecting privacy is important because violations of the right to privacy can harm us individually and collectively, and offer some advice on how to protect our privacy online.

    Medical Privacy and Big Data: A Further Reason in Favour of Public Universal Healthcare Coverage

    Most people are completely oblivious to the dangers their medical data faces as soon as it goes out into the burgeoning world of big data. Medical data is financially valuable, and your sensitive data may be shared or sold by doctors, hospitals, clinical laboratories, and pharmacies, without your knowledge or consent. Medical data can also be gleaned from your browsing history, the smartphone applications you use, data from wearables, your shopping list, and more. At best, data about your health might end up in the hands of researchers on whose good will we depend to avoid abuses of power. Most likely, it will end up with data brokers who might sell it to a future employer, an insurance company, or the government. At worst, your medical data may end up in the hands of criminals eager to commit extortion or identity theft. In addition to data harms related to exposure and discrimination, the collection of sensitive data by powerful corporations risks the creation of data monopolies that can dominate and condition access to health care. This chapter explores the challenge that big data poses to medical privacy. Section I offers a brief overview of the role of privacy in medical settings. I define privacy as having one's personal information and one's personal sensorial space (what I call autotopos) unaccessed. Section II discusses how the challenge of big data differs from other risks to medical privacy. Section III considers what can be done to minimise those risks. I argue that the most effective way of protecting people from suffering unfair medical consequences is to have a public universal healthcare system in which coverage is not influenced by personal data (e.g., genetic predispositions, exercise habits, or eating habits).

    What If Banks Were the Main Protectors of Customers’ Private Data?

    In this article I argue that we are in urgent need of institutional guardianship and management of our personal data. I suggest that banks may be in a good position to take on that role. Perhaps that is the future of banking.

    Data, Privacy, and the Individual

    The first few years of the 21st century were characterised by a progressive loss of privacy. Two phenomena converged to give rise to the data economy: the realisation that the data trails left by users interacting with technology could be used to develop personalised advertising, and a concern for security that led authorities to use such personal data for the purposes of intelligence and policing. In contrast to the early days of the data economy and internet surveillance, the last few years have witnessed a rising concern for privacy. As bad data practices have come to light, citizens are starting to understand the real cost of using online digital technologies. Two events made 2018 a landmark year for privacy: the Cambridge Analytica scandal and the implementation of the European Union's General Data Protection Regulation (GDPR). The former showed the extent to which personal data had been shared without data subjects' knowledge and consent, often for unacceptable purposes such as swaying elections. The latter inaugurated robust data protection regulation in the digital age. Getting privacy right is one of the biggest challenges of this new decade of the 21st century. The past year has shown that there is still much work to be done on privacy to tame the darkest aspects of the data economy. As data scandals continue to emerge, questions abound as to how to interpret and enforce regulation, how to design new and better laws, how to complement regulation with better ethics, and how to find technical solutions to data problems. The aim of the research project Data, Privacy, and the Individual is to contribute to a better understanding of the ethics of privacy and of differential privacy. The outcomes of the project are seven research papers on privacy, a survey, and this final report, which summarises each research paper and goes on to offer a set of reflections and recommendations for implementing best practices regarding privacy.
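
    Since the abstract mentions differential privacy, a minimal sketch may help fix ideas. The canonical construction is the Laplace mechanism: noise calibrated to a query's sensitivity is added to its true answer so that no single individual's data noticeably changes the output. The function, dataset, and query below are illustrative assumptions, not taken from the project's papers.

    # Minimal sketch of the Laplace mechanism for epsilon-differential privacy.
    # All names and data here are hypothetical, for illustration only.
    import numpy as np

    def dp_count(records, predicate, epsilon):
        """Differentially private count of records satisfying `predicate`.

        A counting query has L1 sensitivity 1 (adding or removing one
        person changes the true count by at most 1), so Laplace noise
        with scale 1/epsilon yields epsilon-differential privacy.
        """
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical usage: a noisy count of privacy-conscious respondents.
    responses = ["very important", "unimportant", "very important"]
    print(dp_count(responses, lambda r: r != "unimportant", epsilon=0.5))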

    Moral zombies: why algorithms are not moral agents

    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism, the theory that the universe is made up entirely of physical components, is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent of sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, 'values' will be items on a list, possibly prioritised according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

    Online Masquerade: Redesigning the Internet for Free Speech Through the Use of Pseudonyms

    Anonymity promotes free speech by protecting the identity of people who might otherwise face negative consequences for expressing their ideas. Wrongdoers, however, often abuse this invisibility cloak. Defenders of online anonymity emphasise its value in advancing public debate and safeguarding political dissent. Critics emphasise the need for identifiability to hold wrongdoers such as trolls accountable. The problematic tension between anonymity and identifiability online lies in the desirability of low costs (no repercussions) for desirable speech and high costs (appropriate repercussions) for undesirable speech. If we implement either full anonymity or full identifiability, we end up with uniformly low or uniformly high costs across all online contexts and all kinds of speech. I argue that free speech is compatible with instituting costs in the form of repercussions and penalties for controversial and unacceptable speech. Costs can minimise the risks of anonymity by providing a reasonable degree of accountability. Pseudonymity is a tool that can help us regulate those costs while furthering free speech. This article argues that, in order to redesign the Internet to better serve free speech, we should shape much of it to resemble an online masquerade.

    Chatbots shouldn’t use emojis

    Limits need to be set on AI's ability to simulate human feelings. Ensuring that chatbots don't use emotive language, including emojis, would be a good start. Emojis are particularly manipulative: humans instinctively respond to shapes that look like faces, even cartoonish or schematic ones, and emojis can induce these reactions.